Applying and Evaluating Large Language Models in Mental Health Care: A Scoping Review of Human-Assessed Generative Tasks

Hua, Yining, Na, Hongbin, Li, Zehan, Liu, Fenglin, Fang, Xiao, Clifton, David, Torous, John

arXiv.org Artificial Intelligence

Large language models (LLMs) are emerging as promising tools for mental health care, offering scalable support through their ability to generate human-like responses. However, the effectiveness of these models in clinical settings remains unclear. This scoping review aimed to assess the current generative applications of LLMs in mental health care, focusing on studies where these models were tested with human participants in real-world scenarios. A systematic search across APA PsycNet, Scopus, PubMed, and Web of Science identified 726 unique articles, of which 17 met the inclusion criteria. These studies encompassed applications such as clinical assistance, counseling, therapy, and emotional support. However, the evaluation methods were often non-standardized, with most studies relying on ad hoc scales that limit comparability and robustness. Privacy, safety, and fairness were also frequently underexplored. Moreover, reliance on proprietary models, such as OpenAI's GPT series, raises concerns about transparency and reproducibility. While LLMs show potential in expanding mental health care access, especially in underserved areas, the current evidence does not fully support their use as standalone interventions. More rigorous, standardized evaluations and ethical oversight are needed to ensure these tools can be safely and effectively integrated into clinical practice.



PhD Candidate in Machine Learning for Understanding Mental Health Interventions in Computer Science, Academic Posts with NORWEGIAN UNIVERSITY OF …


Would you trust AI with your mental health?


How on earth could a non-living digital device be of value to any human being experiencing mental health issues? Can a person really develop a sense of trust with AI like they might with another person? Or, even more subtly, what if over time the artificial intelligence, or AI, develops algorithms and draws conclusions about people's mental health that are incorrect, biased, or even discriminatory? Currently, countries all around the world are grappling with the high prevalence of mental health disorders and their enormous impact on people's day-to-day lives, the community, and society as a whole. The truth is that mental health workforces, even when well-funded and supported in developed nations, aren't able to keep up with the demand.